13 research outputs found

    Integration of multimodal data based on surface registration

    The paper proposes and evaluates a strategy for the alignment of anatomical and functional data of the brain. The method takes as input two different sets of images of the same patient: MR data and SPECT. It proceeds in four steps: first, it constructs two voxel models from the two image sets; next, it extracts from the two voxel models the surfaces of regions of interest; in the third step, the surfaces are interactively aligned by corresponding pairs; finally, a unique volume model is constructed by selectively applying the geometrical transformations associated with the regions and weighting their contributions. The main advantages of this strategy are (i) that it can be applied retrospectively, (ii) that it is three-dimensional, and (iii) that it is local. Its main disadvantage with regard to previously published methods is that it requires the extraction of surfaces. However, this step is often required for other stages of multimodal analysis, such as visualization, so its cost can be absorbed into the overall cost of the process.
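
    The abstract leaves the weighting scheme of the final step open. As a rough sketch only, the Python fragment below blends hypothetical per-region rigid transforms with inverse-distance weights; the function names, the choice of region centers and the weighting itself are assumptions, not the paper's method.

        import numpy as np

        def blend_region_transforms(points, transforms, region_centers):
            """Map points of one modality into the other's space by blending
            per-region rigid transforms (4x4 homogeneous matrices), each
            weighted by inverse distance to its region (assumed scheme)."""
            points = np.asarray(points, dtype=float)
            points_h = np.c_[points, np.ones(len(points))]   # homogeneous coords
            # Inverse-distance weights: nearby regions dominate the blend.
            d = np.linalg.norm(points[:, None, :]
                               - np.asarray(region_centers)[None, :, :], axis=2)
            w = 1.0 / (d + 1e-6)
            w /= w.sum(axis=1, keepdims=True)                # normalize per point
            mapped = np.zeros_like(points_h)
            for k, T in enumerate(transforms):
                mapped += w[:, k:k + 1] * (points_h @ np.asarray(T).T)
            return mapped[:, :3]                             # back to 3D coords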

    A Fast hierarchical traversal strategy for multimodal visualization

    In recent years there has been a growing demand for multimodal medical rendering systems able to simultaneously visualize data coming from different sources. This paper addresses the Direct Volume Rendering (DVR) of aligned multimodal data in medical applications. Specifically, it proposes a hierarchical representation of the multimodal data set based on the construction of a Fusion Decision Tree (FDT) that, together with a run-length encoding of the non-empty data, provides a means of efficiently accessing the data. Three different implementations of these structures are proposed. The simulation results show that the traversal of the data is fast and that the method is suitable when interactive modifications of the fusion parameters are required.
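
    As a minimal sketch, assuming a boolean occupancy mask over the voxel grid, the encoding below illustrates the kind of run-length structure the abstract describes; the paper's actual encoding and the FDT layout are not specified there, so the names are invented.

        import numpy as np

        def run_length_encode(mask):
            """Encode a flattened occupancy mask (True = non-empty voxel)
            as (start, length) runs, so traversal can skip empty spans."""
            flat = np.asarray(mask, dtype=bool).ravel()
            padded = np.r_[0, flat.astype(np.int8), 0]
            edges = np.flatnonzero(np.diff(padded))  # empty/non-empty flips
            starts, ends = edges[0::2], edges[1::2]  # flips pair up as (rise, fall)
            return list(zip(starts.tolist(), (ends - starts).tolist()))

        def traverse_runs(runs, visit):
            """Apply `visit` to non-empty voxels only, one run at a time."""
            for start, length in runs:
                for idx in range(start, start + length):
                    visit(idx)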

    Speeding up rendering of hybrid surface and volume models

    Hybrid rendering of volume and polygonal models is an interesting feature of visualization systems, since it helps users to better understand the relationships between internal structures of the volume and fitted surfaces as well as external surfaces. Most of the existing bibliography focuses on the problem of correctly integrating both types of information in depth. The rendering method proposed in this paper builds on those previous results but is aimed at solving a different problem: how to efficiently access selected information of a hybrid model. We propose to construct a decision tree (the Rendering Decision Tree) which, together with an auxiliary run-length representation of the model, avoids visiting unselected surfaces and internal regions during a traversal of the model.
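
    A guess at the skip logic rather than the paper's actual structure: the sketch below tags each run with a label and lets a selection predicate discard whole runs at once. The real Rendering Decision Tree encodes richer decisions than this single predicate, and all names here are illustrative.

        from dataclasses import dataclass
        from typing import Callable, Iterator, List

        @dataclass
        class Run:
            start: int    # first cell index of the run
            length: int   # number of cells in the run
            label: str    # e.g. a surface id or volume region id (invented)

        def traverse_selected(runs: List[Run],
                              selected: Callable[[str], bool]) -> Iterator[int]:
            """Yield only cells of runs whose label is selected, skipping
            each unselected run in one step (the point of the run-length layout)."""
            for run in runs:
                if not selected(run.label):
                    continue              # whole run skipped, no per-cell work
                yield from range(run.start, run.start + run.length)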

    Rendering techniques for multimodal data

    Many different direct volume rendering methods have been developed to visualize 3D scalar fields on uniform rectilinear grids. However, little work has been done on simultaneously rendering various properties of the same 3D region measured with different registration devices or at different instants of time. The demand for this type of visualization is rapidly increasing in scientific applications such as medicine, in which the visual integration of multiple modalities allows a better comprehension of the anatomy and a perception of its relationships with activity. This paper presents different strategies of Direct Multimodal Volume Rendering (DMVR). It is restricted to voxel models with a known 3D rigid alignment transformation. The paper evaluates at which steps of the rendering pipeline the data fusion must be realized in order to accomplish the desired visual integration and to provide fast re-renders when some fusion parameters are modified. In addition, it analyzes how existing monomodal visualization algorithms can be extended to multiple datasets, and it compares their efficiency and computational cost.
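
    To make the question of where fusion happens concrete, here is a hedged sketch contrasting two possible fusion points in a ray-casting step: merging raw property values before classification versus blending classified colors afterwards. The transfer functions and the single weight are placeholders, not the paper's strategies.

        import numpy as np

        def fuse_at_property_level(mr_sample, spect_sample, w, tf):
            """Fuse the raw scalar values first, then classify once."""
            fused = w * mr_sample + (1.0 - w) * spect_sample
            return tf(fused)              # single (color, opacity) lookup

        def fuse_at_color_level(mr_sample, spect_sample, w, tf_mr, tf_spect):
            """Classify each modality separately, then blend the results."""
            c1, a1 = tf_mr(mr_sample)
            c2, a2 = tf_spect(spect_sample)
            color = w * np.asarray(c1) + (1.0 - w) * np.asarray(c2)
            alpha = w * a1 + (1.0 - w) * a2
            return color, alpha

    The re-rendering cost the abstract alludes to differs between the two: changing the weight in the first variant invalidates the classification of every sample, while in the second it only re-blends already classified colors.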

    Design of a multimodal rendering system

    This paper addresses the rendering of aligned regular multimodal datasets. It presents a general framework for multimodal data fusion that includes several data merging methods. We also analyze the requirements of a rendering system able to provide these different fusion methods. On the basis of these requirements, we propose a novel design for a multimodal rendering system. The design has been implemented and tested, proving to be efficient and flexible.
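
    The abstract does not detail the design; one plausible reading of "several data merging methods" behind a single renderer is a pluggable fusion interface, sketched below with invented names.

        from abc import ABC, abstractmethod
        from typing import List

        class FusionMethod(ABC):
            @abstractmethod
            def fuse(self, samples: List[float]) -> float:
                """Merge one sample per modality into a single value."""

        class WeightedAverage(FusionMethod):
            def __init__(self, weights: List[float]):
                self.weights = weights
            def fuse(self, samples: List[float]) -> float:
                return sum(w * s for w, s in zip(self.weights, samples))

        class MaxIntensity(FusionMethod):
            def fuse(self, samples: List[float]) -> float:
                return max(samples)

        def render_sample(samples: List[float], method: FusionMethod) -> float:
            # The renderer depends only on the interface, so merging
            # methods can be swapped without touching the pipeline.
            return method.fuse(samples)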

    Integration, modeling and visualization of multimodal data of the human brain

    The purpose of this report is to analyze the current methods of data integration in Computer-Assisted Neurosurgery applications. Neurological studies require the integration of anatomical data registered with CT (Computed Tomography), MRI (Magnetic Resonance Imaging) and MRA (Magnetic Resonance Angiography) with functional data from the nuclear tomographies, PET and SPECT, along with fMRI (Functional Magnetic Resonance Imaging) and MEG (Magnetoencephalography). The first section of the document presents the physical model corresponding to the human brain; it describes the anatomy, the physiology and the pathologies that call for the assistance of imaging modalities. The second section analyzes the type of data provided by the devices enumerated above: their values, type and range, the physical properties that they measure, and their resolution. The impact of the registration on the patients is also evaluated, taking into account the invasiveness of each technique and its harmfulness. The third and fourth sections focus on the integration itself. First, a general overview of the integration process is given. Next, different integration strategies are presented and compared. The fifth section addresses the modelling of integrated data. In the sixth section, the error of each step of the integration is analyzed. Finally, several strategies for the joint visualization of several data types are discussed. In the conclusions, a global evaluation of the different methods is presented, outlining their shortcomings and their advantages.

    Visual clues in multimodal rendering

    This report presents a comparative analysis of the different multimodal rendering methods proposed in [FPT02]. It shows how relevant features of a property, as well as relationships between data, can be outlined by choosing an appropriate fusion modality. In addition, it analyses the visual clues that can be provided by using different shading models and by enabling rendering parameters such as depth cueing and light source attenuation. The simulations are performed with the Hipo software, whose design is described in [PTF02].
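
    As a small illustration of one of the clues mentioned, depth cueing, the sketch below attenuates a shaded sample toward a background color with distance; the exponential falloff and the parameter names are assumptions, not taken from the report.

        import numpy as np

        def depth_cue(color, depth, background=np.zeros(3), density=0.05):
            """Blend a shaded sample color toward the background with depth."""
            visibility = np.exp(-density * depth)  # 1.0 at the eye, -> 0 far away
            return visibility * np.asarray(color) + (1.0 - visibility) * background

        # Example: the same red sample rendered near and far.
        near = depth_cue([1.0, 0.0, 0.0], depth=5.0)
        far = depth_cue([1.0, 0.0, 0.0], depth=50.0)  # noticeably dimmer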
